Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beihang University, Beijing 100191, China
† Corresponding author. E-mail: yuanyan@buaa.edu.cn
1. Introduction

Imaging spectrometers can collect two-dimensional (2D) spatial information and one-dimensional (1D) spectral information (called a three-dimensional (3D) datacube) of a scene. Imaging spectrometry is a rapidly growing technology which has been utilized in recent decades in many fields, including remote sensing,[1] security,[2] biomedicine,[3,4] environmental monitoring,[5] and so on. The image mapping spectrometer (IMS) is a kind of snapshot imaging spectrometer which has no scanning part and can obtain a 3D datacube of the objects in a single integration time.[6] The IMS was presented by Rice University in 2009.[7] It was developed from the integral field spectrometers (IFSs) which have been used in astronomy for decades.[8–16] Compared with the integral field units (IFUs) used in other types of IFSs, the image mapper used in the IMS consists of multiple narrow mirror facets and is carefully designed, so that the IMS has a much more compact structure and a higher spatial resolution than other IFSs.[17]
Gao et al. introduced the operation principle of the IMS and developed the first prototype, which can acquire a datacube of 100×100×25.[7] Then, the systems were improved to acquire datacubes of 285×285×60 and 350×350×46, which were utilized in hyperspectral microscopy,[18] spectral imaging endoscopy,[19] and remote sensing.[20] These instruments can be used for cell fluorophore discrimination,[21,22] clinical tissue diagnostics,[23,24] and gas detection.[25] Moreover, the IMS can also be combined with structured illumination and optical coherence tomography technology for more applications in biomedicine.[26–28] At the same time, some system errors and various factors influence the system performance, such as diffraction, scattering from the reflecting surfaces and edges, surface form errors, surface roughness and width variations, "edge eating", etc.[29] In order to improve the performance of the IMS, the errors of the image mapper were explored and corrected.[30,31] Kester et al. presented a model of the image mapper and analyzed the surface form errors of the strip mirrors.[31] They analyzed the light intensity distribution on the pupil array plane. The results suggested that the surface form error contributes significantly to the cross talk, which distorts the spectral information and degrades the spectral images. Based on the model, they optimized the design of the image mapper to reduce the system cross talk. However, they only modeled the image mapper for its optimal design, without considering the dispersion process or the other factors of the entire spectral imaging system, and the point response function (PRF) of the IMS was not deduced. As a result, the influence of the system issues mentioned above on the quality of the dataset cannot be analyzed with that model.
In this paper, we present a theoretical model of the IMS, including the light propagation from the object plane to the image plane together with the dispersion process, based on scalar diffraction theory. The rest of the paper is organized as follows. The system structure and operation principles of the IMS are introduced in Section 2. The PRF for the light propagation through the entire system is derived in Section 3. In Section 4, simulation experiments are conducted to generate synthetic spectral imaging data acquired by an IMS. In Section 5, the influences of the mirror tilt angle error of the image mapper and the prism apex angle error are analyzed based on the presented model. Finally, some conclusions are drawn from the present study and future work is suggested in Section 6.
2. Principles of the snapshot image mapping spectrometer

The snapshot IMS consists of a fore optical system and an imaging spectrometer, as illustrated in Fig. 1. The fore optical system (referred to as the fore optics) is comprised of an aperture and lens L1, which images an object onto the image mapper. The image mapper is an array of long strip mirrors arranged with 2D tilt angles, as shown in Fig. 2. The fore optics is telecentric in the image space, so the principal rays are parallel to the optical axis. In the imaging spectrometer, the reflected light from each mirror is collimated by the lens L2 and dispersed by a corresponding prism. Finally, the dispersed light is imaged onto the sensor by the lens L3 placed behind the prism array. In the IMS, the image mapper divides the primary image and reflects the incident light toward different directions that are in accordance with the normal vectors of the mirrors. The dispersive prisms and the imaging lenses L3 are arranged in arrays correspondingly.[17]
In order to analyze the light wave transmission from object to image in the IMS, coordinate systems are defined: $(x_0, y_0)$ for the object plane, $(\xi, \eta)$ for the plane of the aperture stop, $(x_1, y_1)$ for the image mapper, and $(x_2, y_2)$ for the sensor. As indicated in Fig. 1, $d_0$ is the distance from the object to the aperture stop and $d_1$ is the image distance in the fore optics. $f_1$, $f_2$, and $f_3$ are the focal lengths of L1, L2, and L3, respectively.
As a shift-variant incoherent imaging system, the monochromatic image on the sensor of the IMS can be expressed as
$$ I(x_2, y_2; \lambda) = \iint O(x_0, y_0; \lambda)\,\mathrm{PRF}(x_2, y_2; x_0, y_0; \lambda)\,\mathrm{d}x_0\,\mathrm{d}y_0, \tag{1} $$
where $O(x_0, y_0; \lambda)$ is the spectral radiation of the object and $\mathrm{PRF}(x_2, y_2; x_0, y_0; \lambda)$ is the spatial imaging point response function (PRF) of the IMS at wavelength $\lambda$. In this paper, the PRF is derived by using scalar diffraction theory.[32]

3. Imaging model

In this section, the imaging model of the IMS is derived by the following procedure. First, the object is imaged onto the image mapper through the fore optics. Then, the strip mirrors slice the primary image and reflect it into the imaging spectrometer. The sliced strips are dispersed and imaged through the prism array and the lens array. Since the image mapper position variation caused by the tilt is controlled within the depth-of-field range of the fore optics, the influence of the tilt of the image mapper is not included in the model.
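The shift-variant superposition in Eq. (1) cannot be written as a single convolution, so numerically it is an accumulation of per-point PRFs. A minimal sketch of this discretization (Python rather than the paper's MATLAB; all names and the toy Gaussian "PRF" are illustrative, not the paper's implementation):

```python
import numpy as np

def simulate_image(obj, prf_bank):
    """Discretize Eq. (1): the sensor image is the incoherent superposition
    of the PRFs of all object points, weighted by the object radiance.

    obj      : 2D array, spectral radiance O(x0, y0) at one wavelength
    prf_bank : callable (i, j) -> 2D PRF array on the sensor grid for object
               sample (i, j); shift-variant, so no single convolution applies
    """
    img = None
    for (i, j), val in np.ndenumerate(obj):
        if val == 0:
            continue
        prf = prf_bank(i, j)
        # Incoherent system: intensities add
        img = val * prf if img is None else img + val * prf
    return img

# Toy example: a 2x2 object and Gaussian "PRFs" whose center shifts with (i, j)
xx, yy = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")

def toy_prf(i, j):
    cx, cy = 8 + 12 * i, 8 + 12 * j          # shift-variant center
    g = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 4.0)
    return g / g.sum()                        # normalize energy to 1

obj = np.array([[1.0, 0.5], [0.0, 2.0]])
img = simulate_image(obj, toy_prf)
print(img.sum())  # energy is conserved: 1.0 + 0.5 + 2.0 = 3.5
```

Because each object point carries its own PRF, this scales with the number of object samples, which is why the simulations below restrict the grid sizes.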
3.1. Imaging model of the fore optics

Considering the fore optics in Fig. 1, the object denoted by the coordinates $(x_0, y_0)$ is imaged by a telecentric imaging lens. The aperture gives the generalized pupil function $P(\xi, \eta)$, and then the impulse response function (IRF) of the fore optics is given as
$$ h_1(x_1, y_1; x_0, y_0) = \frac{1}{\lambda^2 d_0 d_1}\iint P(\xi, \eta)\exp\!\left\{-\mathrm{i}\frac{k}{d_1}\left[(x_1 - M x_0)\xi + (y_1 - M y_0)\eta\right]\right\}\mathrm{d}\xi\,\mathrm{d}\eta, \tag{2} $$
where $k = 2\pi/\lambda$, with $\lambda$ being the imaging wavelength, and $(x_L, y_L)$ is defined as the principal plane of L1, which is considered as a single thin lens for simplicity. And $M = -d_1/d_0$ is the magnification of the fore optics. The details of the derivations and the simplifications necessary to obtain Eq. (2) are given in Appendix A.
Therefore, the PRF of the fore optics is
$$ \mathrm{PRF}_1(x_1, y_1; x_0, y_0) = \left|h_1(x_1, y_1; x_0, y_0)\right|^2. \tag{3} $$
3.2. Slicing and reflecting model of the image mapper

The image mapper is comprised of long strip mirrors, as shown in Fig. 2(a). The coordinate system $x_1$–$y_1$ is established in Fig. 2(a) with the $y_1$ axis along the strip length of the mirrors. The normal vector direction of a mirror is defined by the tilt angles $(\alpha_m, \beta_n)$ as illustrated in Fig. 2(b), where $m = 1, 2, \ldots, M$ and $n = 1, 2, \ldots, N$, and the $M \times N$ mirrors are arranged orderly to form a block, also shown in Fig. 2(a). The mirror arrangement is replicated in the other blocks.
The reflection function of the image mapper is defined as[31]
$$ R(x_1, y_1) = r\sum_{q=1}^{N_c}\sum_{m=1}^{M}\sum_{n=1}^{N}\mathrm{rect}\!\left(\frac{x_1 - x_{mnq}}{b}\right)\mathrm{rect}\!\left(\frac{y_1}{l}\right)\exp\!\left[\mathrm{i}2k(\alpha_m x_1 + \beta_n y_1)\right], \tag{4} $$
where $(x_1, y_1)$ represents the coordinates on the image mapper, $x_{mnq}$ is the center position of the strip mirror with tilt angles $(\alpha_m, \beta_n)$ in the $q$-th block, $l$ and $b$ are the length and the width of each strip mirror, respectively, $c$ is the block width, where $c = MNb$, and $w$ is the width of the image mapper. The reflection coefficient of the image mapper is $r$, which is assumed to be uniform for all mirrors. The block number is $N_c = w/c$.
Therefore, the IRF after the image mapper becomes
$$ h_2(x_1, y_1; x_0, y_0) = h_1(x_1, y_1; x_0, y_0)\,R(x_1, y_1). \tag{5} $$
3.3. Virtual aperture array

The aperture stop, situated on the object space focal plane of L1, forms a virtual aperture array on the image space focal plane of L2. The basic configuration between the aperture stop and the virtual aperture array can be treated as the system[31] shown in Fig. 3.
There are $M \times N$ virtual apertures, matching the number of tilt directions of the image mapper. The plane of the virtual aperture array is denoted as $(x_v, y_v)$ in Fig. 3, and $(x_{v,m}, y_{v,n})$ are the coordinates of the aperture centers, which are given by
$$ x_{v,m} = f_2\tan(2\alpha_m), \qquad y_{v,n} = f_2\tan(2\beta_n), \tag{6} $$
where $m = 1, 2, \ldots, M$, $n = 1, 2, \ldots, N$, and $f_2$ is the focal length of L2. The distance between the centers of two adjacent apertures is given by
$$ \Delta = f_2\left[\tan(2\alpha_{m+1}) - \tan(2\alpha_m)\right]. \tag{7} $$
Expanding Eq. (7):
$$ \Delta = f_2\left[2\alpha_{m+1} + \frac{(2\alpha_{m+1})^3}{3} + \cdots - 2\alpha_m - \frac{(2\alpha_m)^3}{3} - \cdots\right], \tag{8} $$
we can define
$$ \Delta\alpha = \alpha_{m+1} - \alpha_m. \tag{9} $$
Since $\alpha_m$ and $\beta_n$ ($m = 1, \ldots, M$; $n = 1, \ldots, N$) are extremely small angles, equation (8) can be simplified into
$$ \Delta \approx 2 f_2\,\Delta\alpha. \tag{10} $$
Being the image of the aperture stop through L1 and L2, the virtual aperture has the diameter
$$ D_v = \frac{f_2}{f_1} D, \tag{11} $$
where $D$ is the diameter of the aperture stop in the fore optics. In order to avoid overlapping between adjacent virtual apertures, we need to ensure that
$$ \Delta \geq D_v. \tag{12} $$
Substituting Eqs. (10) and (11) into Eq. (12), we conclude that
$$ \Delta\alpha \geq \frac{D}{2 f_1}. \tag{13} $$
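The non-overlap condition can be checked quickly against the design values of Table 1. A sketch assuming the small-angle relations $\Delta = 2 f_2 \Delta\alpha$ and $D_v = (f_2/f_1)D$ discussed above (variable names are illustrative):

```python
# Check of the virtual-aperture geometry with the Table 1 parameters.
f1, f2 = 50.0, 100.0          # focal lengths of L1 and L2 /mm
D = 2.4                       # aperture-stop diameter /mm
dalpha = 0.024                # tilt-angle step of the strip mirrors /rad

delta = 2 * f2 * dalpha       # spacing of adjacent virtual-aperture centers
Dv = (f2 / f1) * D            # virtual-aperture diameter

print(delta, Dv)              # both 4.8 mm: the design sits at the limit
assert delta >= Dv            # adjacent virtual apertures do not overlap
assert dalpha >= D / (2 * f1) # equivalent condition on the tilt-angle step
```

With the Table 1 values the spacing equals the aperture diameter (4.8 mm), i.e., the design exactly meets the non-overlap limit.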
3.4. Prism dispersing and spectral imaging

The dispersion process of the imaging spectrometer is based on prism dispersion. An $M \times N$ prism array is placed on the plane of the virtual aperture array, as shown in Fig. 1, to disperse the light reflected by the image mapper. Meanwhile, an $M \times N$ lens array is placed behind the prism array to converge the dispersed light to form spectral images on the sensor. In order to simplify the model, a singlet prism array is adopted. The prisms are identical and assumed to be thin, and thus the light propagation inside the prisms is not considered.[33] The lenses L3 in the lens array are treated in the same way. In practice, the singlet prism array can be replaced by an Amici prism array for better imaging and dispersing performance.
The strip images reflected by mirrors oriented at the same angle are imaged again on the sensor through a certain virtual aperture. In the spectral imaging system, the mirrors can be taken as slits situated on the object space focal plane of L2, as depicted in the schematic diagram in Fig. 4. The plane coordinates are denoted by $x_2$–$y_2$ for the sensor, where $x_2$ is along the spectral direction. The spectral images of the slits spread parallel to $x_2$ to form a subimage under the virtual aperture. If $\lambda_0$ is the central wavelength in the range of $[\lambda_{\min}, \lambda_{\max}]$ and $\delta_0$ is the deviation angle of the optical axis after the prism, the dispersion near the intersecting position of the optical axis with the sensor is given by
$$ \Delta x_2(\lambda) = f_3\,\frac{\mathrm{d}\delta}{\mathrm{d}\lambda}\bigg|_{\lambda_0}(\lambda - \lambda_0), \tag{14} $$
where $\Delta x_2(\lambda)$ is the dispersion position of wavelength $\lambda$ relative to $\lambda_0$, $f_3$ is the focal length of L3, and $\mathrm{d}\delta/\mathrm{d}\lambda$ is the prism angular dispersion:
$$ \frac{\mathrm{d}\delta}{\mathrm{d}\lambda} = \frac{2\sin(A/2)}{\sqrt{1 - n^2\sin^2(A/2)}}\,\frac{\mathrm{d}n}{\mathrm{d}\lambda}, \tag{15} $$
where $A$ is the apex angle and $n$ is the refractive index of the prism.
The dispersion spread $\Delta L$ can be deduced from Eq. (15) and expressed as
$$ \Delta L = f_3\left|\frac{\mathrm{d}\delta}{\mathrm{d}\lambda}\right|_{\lambda_0}(\lambda_{\max} - \lambda_{\min}). \tag{16} $$
We let the distance between the adjacent slits be $c$, and the overlap limitation for their spectral images on the sensor is given by
$$ \Delta L \leq \frac{f_3}{f_2}\,c. \tag{17} $$
According to Fig. 1, $M \times N$ subimages will be formed on the sensor plane to match the virtual aperture array, as shown in Fig. 5. Their centers, denoted as $(x_{s,m}, y_{s,n})$, where $m = 1, 2, \ldots, M$ and $n = 1, 2, \ldots, N$, have the same values as those of the virtual aperture array, so they are given by
$$ x_{s,m} = f_2\tan(2\alpha_m), \qquad y_{s,n} = f_2\tan(2\beta_n). \tag{18} $$
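The spread and the overlap limitation can be estimated numerically from the Table 1 parameters. The sketch below linearizes the dispersion at the central wavelength (so it underestimates the full nonlinear spread) and uses the Schott catalog Sellmeier coefficients for SF6 as an assumption; the minimum-deviation form of the angular dispersion is taken:

```python
import numpy as np

# Sellmeier coefficients for Schott SF6 (catalog values; an assumption here)
B = (1.72448482, 0.390104889, 1.04572858)
C = (0.0134871947, 0.0569318095, 118.557185)   # /um^2

def n_sf6(lam_um):
    """Refractive index of SF6 from the Sellmeier formula."""
    l2 = lam_um ** 2
    return np.sqrt(1 + sum(bi * l2 / (l2 - ci) for bi, ci in zip(B, C)))

A = np.deg2rad(29.9)      # prism apex angle
f2, f3 = 100.0, 28.125    # focal lengths of L2 and L3 /mm
c_slit = 4.0              # distance between adjacent slits /mm

# dn/dlambda at the central wavelength (central finite difference)
lam0, dlam = 0.600, 1e-4
dn = (n_sf6(lam0 + dlam) - n_sf6(lam0 - dlam)) / (2 * dlam)   # /um

n0 = n_sf6(lam0)
# Minimum-deviation form of the prism angular dispersion
ddelta = 2 * np.sin(A / 2) / np.sqrt(1 - (n0 * np.sin(A / 2)) ** 2) * dn

spread = f3 * abs(ddelta) * (0.750 - 0.450)   # linearized spread over 450-750 nm
limit = c_slit * f3 / f2                      # overlap limitation: 1.125 mm
print(n0, spread, limit)
assert spread < limit                         # no overlap between adjacent slits
```

The Sellmeier value reproduces the tabulated $n(600\,\mathrm{nm}) \approx 1.8033$, and the linearized spread stays below the 1.125-mm limitation; the full simulation in Section 4 accounts for the nonlinear dispersion.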
3.5. IRF of the imaging spectrometer

Considering the layout of the system in Fig. 1, the light wave after the image mapper is imaged onto the sensor after passing through L2, the prism array, and L3. According to the above equations, the IRF of the imaging spectrometer at wavelength $\lambda$ is extended as follows:
$$ h(x_2, y_2; x_0, y_0; \lambda) = \iint h_2(x_1, y_1; x_0, y_0)\,h_3(x_2, y_2; x_1, y_1; \lambda)\,\mathrm{d}x_1\,\mathrm{d}y_1, \tag{19} $$
where $h_3(x_2, y_2; x_1, y_1; \lambda)$ is the IRF from the image mapper plane to the sensor through L2, the prism array, and L3. In Appendix B, we give the detailed steps to obtain Eq. (19).

The PRF of the entire system at wavelength $\lambda$ is represented by $\mathrm{PRF}(x_2, y_2; x_0, y_0; \lambda)$, which is the squared modulus of $h(x_2, y_2; x_0, y_0; \lambda)$, so we acquire the PRF in Eq. (1) as follows:
$$ \mathrm{PRF}(x_2, y_2; x_0, y_0; \lambda) = \left|h(x_2, y_2; x_0, y_0; \lambda)\right|^2. \tag{20} $$
The PRF is a function of the coordinates $x_0$, $y_0$, $x_2$, and $y_2$; it describes the light intensity distribution on the image plane produced by a point source on the object plane, which means that each point source on the object plane determines a unique light intensity distribution on the image plane.
4. Simulation results

In this paper, the imaging simulation is performed by using MATLAB.[34] The structure parameters are shown in Table 1. The prism material is chosen to be SF6, and its refractive index $n(\lambda)$ is determined by the Sellmeier formula.[35]
Table 1. Structure parameters.

| Fore optics | | Image mapper | | Imaging spectrometer | |
| Parameter | Value | Parameter | Value | Parameter | Value |
| $d_0$ | 50 mm | $l$, $w$, $N_c$ | 16 mm, 16 mm, 4 | $f_2$ | 100 mm |
| $d_1$ | 100 mm | $M$, $N$ | 5, 5 | $f_3$ | 28.125 mm |
| $f_1$ | 50 mm | $r$ | 1 | $\Delta$ | 4.8 mm |
| $D$ | 2.4 mm | $\{\alpha_m\}$ | {−0.048, −0.024, 0, 0.024, 0.048} rad | prism apex angle $A$ | 29.9° |
| $\lambda$ | 450 nm ∼ 750 nm | $\{\beta_n\}$ | {−0.048, −0.024, 0, 0.024, 0.048} rad | $n$(600 nm) | 1.8033 |
The parameters deduced from Table 1 are listed in Table 2.

Table 2. Deduced structure parameters.

| Parameter | Value |
| $\lambda_0$ | 600 nm |
| $b$ | 0.16 mm |
| $c$ | 4 mm |
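The deduced entries follow from Table 1 through the geometry of Section 3; a short sketch (the relations $c = w/N_c$ and $b = c/(MN)$ are implied by the block layout of the image mapper):

```python
# Deriving the Table 2 parameters from the Table 1 parameters.
M, N, Nc = 5, 5, 4               # tilt-direction counts and number of blocks
w = 16.0                         # image-mapper width /mm
lam_min, lam_max = 450.0, 750.0  # spectral range /nm

lam0 = (lam_min + lam_max) / 2   # central wavelength -> 600 nm
c = w / Nc                       # block width -> 4 mm
b = c / (M * N)                  # strip-mirror width -> 0.16 mm
print(lam0, c, b)
```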
According to Table 1 and Table 2, the magnification of the fore optics is 1, and thus the object field is chosen as 16 mm × 16 mm. Since the magnification of the imaging spectrometer is 0.28125, the width of a spectral image line on the sensor is 0.16 mm × 0.28125 = 45 μm. As a result, the pixel size of the sensor is chosen as 7.5 μm × 7.5 μm. In order to obtain precise simulation results, we set the sampling pixel size to be 1/3 of the sensor pixel size, so that each element size of the image is 2.5 μm in the simulation.
4.2. PRF of the IMS system

Based on the analysis in Subsection 3.4 and the parameters listed in Table 1, there are 25 subimages on the sensor, and 4 spectral image lines are included in each subimage at one wavelength. The PRFs at λ = 450 nm, 600 nm, and 750 nm are shown in Fig. 7. Figure 8 shows the layout of the subimages at the central wavelength of 600 nm. Figure 9 shows the spectral images at λ = 450 nm, 500 nm, 550 nm, 600 nm, 650 nm, 700 nm, and 750 nm when the object is a uniform radiation plane.
The total spectral spread $\Delta L$ from 450 nm to 750 nm is 1.08 mm, which is less than the 1.125-mm limitation given by Eq. (17), and thus the spectral images of adjacent slits will not overlap. The mean width of each sliced image line at 600 nm shown in Fig. 9 is about 46.25 μm, so the number of spectrally resolvable bands can be approximated as 1.125 mm/46.25 μm = 24.3. This result is close to the 25 spectral bands of the design value. The calculated center spacing between adjacent subimages is 4.8 mm along both the $x_2$ direction and the $y_2$ direction. Meanwhile, the width of a subimage is about 4.5 mm, which is smaller than the spacing. As a result, the slices of the object field of view can be imaged separately on the sensor.
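The two consistency checks above are simple arithmetic on the quoted numbers; a sketch (illustrative only, the values are taken from the text, not recomputed from the model):

```python
# Resolvable-band estimate and subimage-separation check from the quoted values.
limit = 1.125          # overlap limitation for the spectral spread /mm
line_width = 0.04625   # mean width of a sliced image line at 600 nm /mm

bands = limit / line_width
print(round(bands, 1))          # 24.3, close to the 25 designed bands

spacing, subimage = 4.8, 4.5    # subimage center spacing and width /mm
assert subimage < spacing       # subimages do not overlap on the sensor
```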
4.3. Spectral imaging

As shown in Fig. 10(a), a hyperspectral data cube obtained by the push-broom hyperspectral imager (PHI)[36] is employed for the simulation. The single spectral image is defined with a size of 16 mm × 16 mm. For a fine simulation, we assume that it is sampled by an 1800×1800 grid. In addition, the spectral range of the data cube is from 450 nm to 750 nm, and 25 spectral bands are chosen for the simulation. Figure 10(b) shows the imaging data acquired by the IMS on the sensor.
In Fig. 10(b), the long-wavelength spectral bands appear brighter on the detector than the short-wavelength bands owing to the nonlinear dispersion of the prism. The reconstructed spectral images are obtained by the remapping algorithm,[17] which establishes a one-to-one correspondence between each voxel in the data cube and the pixels on the image plane through calibration. The results are shown in Fig. 11.
In order to verify the spectral imaging performance, the spectra of different targets in the reconstructed data cube are compared with the original spectra, as shown in Figs. 12(b)–12(d). We can find that the spectra of the three points all appear lower in the short-wavelength bands and higher in the long-wavelength bands than the original spectra, which is caused by the nonlinear dispersion of the prism. After gray-scale calibration of the data cube, the calibrated spectra of the three points are shown in Figs. 12(e)–12(g).
The reconstructed images exhibit most features of the original object, as shown in Fig. 11, and the calibrated reconstructed spectra are close to the original spectra, as shown in Fig. 12. One hundred object points are chosen to evaluate the spectral angle (SA) between the reconstructed spectral curves and the original ones, and the average result is about 0.003 rad. Note that some details are blurred and there are some distortions in the reconstructed spectra, which are mainly caused by the spatial resolution degradation of the reconstructed spectral images, the roughly 1% cross talk between adjacent subimages, and the nonlinear dispersion of the prism.
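The spectral angle used for this evaluation is the standard arc-cosine similarity between two spectra; a minimal sketch (the function name is illustrative; note that the metric is invariant to a uniform gain, so a global gray-scale factor does not change it):

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (rad) between two spectra of equal band count."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding errors

# Identical spectra give SA = 0; a uniformly scaled spectrum also gives SA = 0.
s = np.linspace(0.2, 1.0, 25)
print(spectral_angle(s, s), spectral_angle(s, 2 * s))
```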
5. Effects of errors

The image mapper and the prism array are difficult to manufacture. The nonuniformity of the mirrors and the prisms deteriorates the performance of the spectral imaging. In this section, the effects of these errors are analyzed based on the presented model.
5.1. Tilt angle error of the strip mirror

The image mapper is usually machined by the diamond raster fly cutting method.[7] The common tilt angle error of each facet is about $10^{-4}$ rad to $10^{-3}$ rad.[7,31] The actual tilt angles of the mirrors are
$$ \alpha_m' = \alpha_m + \Delta\alpha_m, \qquad \beta_n' = \beta_n + \Delta\beta_n, \tag{21} $$
where $\Delta\alpha_m$ and $\Delta\beta_n$ are the angle errors. According to Eq. (6), the reflection angle of the light from each strip mirror changes, and thus the virtual aperture centers deviate from the designed values. The deviations cause the pupils to be displaced in the virtual apertures. As a result, the light throughput passing through L3 may decrease and the light leaking into adjacent lenses may increase. The light intensity variation on the L3 plane is calculated to evaluate the influence of the deviation on the light throughput. The light intensity ratio (LIR) is given by
$$ \mathrm{LIR} = \frac{I_b}{I_a}, \tag{22} $$
where $I_a$ is the light intensity before L3 and $I_b$ is the light intensity immediately after L3. A higher LIR means a higher light intensity on the image plane, which helps to improve the SNR of the system.[37,38]

The light that leaks from one lens of L3 to the neighboring lenses produces cross talk, which is caused by the diffraction of the image mapper and by some system errors, such as the roughness of and scattering on the mirror facets. The cross talk degrades the reconstructed spectral images and causes spectral information to mix between adjacent subimages.
In the simulation, $(\Delta\alpha_m, \Delta\beta_n)$ are assumed to be random errors between 0 mrad and 0.97 mrad. Random values in the ranges of 0–0.097 mrad, 0.097–0.194 mrad, …, and 0.873–0.97 mrad, respectively, are implemented as the values of $(\Delta\alpha_m, \Delta\beta_n)$ for each strip mirror tilt angle. Then, the LIR and the cross talk are computed through simulations. The results are obtained by calculating the mean value after 5 repetitions at λ = 600 nm.
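The throughput loss can also be estimated with a purely geometric sketch: a tilt error $\Delta\alpha$ deflects the reflected beam by $2\Delta\alpha$, displacing the pupil by $2\Delta\alpha f_2$ on the L3 plane, and the lost light is the part of the displaced pupil disk falling outside the lens aperture. This is a simplified estimate (diffraction ignored, uniform disk pupil, L3 aperture taken equal to the virtual-aperture diameter), not the paper's wave-optical simulation:

```python
import numpy as np

f2 = 100.0      # focal length of L2 /mm
Dv = 4.8        # virtual-aperture / lens-L3 diameter /mm (assumed equal)

def lir_estimate(tilt_err_rad):
    """Geometric LIR estimate for one strip mirror's tilt error."""
    d = 2 * tilt_err_rad * f2            # pupil displacement on the L3 plane
    R = Dv / 2
    if d >= 2 * R:
        return 0.0                        # pupil fully outside the lens
    # Overlap area of two equal disks whose centers are d apart
    overlap = 2 * R**2 * np.arccos(d / (2 * R)) \
        - (d / 2) * np.sqrt(4 * R**2 - d**2)
    return overlap / (np.pi * R**2)

for err_mrad in (0.0, 0.5, 0.97):
    print(err_mrad, lir_estimate(err_mrad * 1e-3))
```

The estimate reproduces the qualitative trend of Fig. 13(a): the LIR decreases monotonically as the tilt error grows.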
The results are illustrated in Fig. 13. It shows that the LIR is about 95% and the cross talk is about 0.796% when no error is considered. As the error added to the model increases, the LIR decreases. The LIR with the maximum tilt error drops by about 10% compared with that of no error. This indicates that the tilt error reduces the system light throughput. As illustrated in Fig. 13(b), the cross talk also decreases when the tilt error increases, which indicates that the tilt error makes no contribution to the cross talk enhancement. Figure 13(c) shows the cross-sections of the same image on the sensor when different tilt errors are considered. The image intensity decreases as the tilt error increases, which fits the curve in Fig. 13(a). In order to avoid reducing the light intensity by more than 5%, the tilt angle error should be less than the corresponding threshold read from Fig. 13(a).
5.2. Apex angle error of the prism

The error of the prism apex angle emerges from the manufacturing process of the prism array. The prism apex angle is expressed as
$$ A' = A + \Delta A, \tag{23} $$
where $\Delta A$ is the apex angle error. According to Eqs. (15) and (16), the angular dispersion and the spectral spread are affected by this error. In particular, the variation of the spectral spread $\Delta L$ may cause spectral mixing between adjacent spectral images. The ideal apex angle of the prism is 29.9°, which is chosen to fully utilize the void region between adjacent spectral images to acquire the spectrum from 450 nm to 750 nm. According to Eq. (16), it is easy to see that $\Delta L$ becomes larger as $\Delta A$ varies from negative to positive values. An angle error above zero would cause spectral mixing between the adjacent spectral images, as illustrated in Fig. 14.

As the angle error increases, the mixing between the 450-nm image line and the adjacent 750-nm image line becomes significant. When a large angle error is added to the prism apex angle, the adjacent 750-nm image line has a significant influence on the intensity of the 450-nm one, which causes the 450-nm spectral information to be inaccurate. Therefore, $\Delta A$ should be kept below the threshold indicated in Fig. 14 to avoid spectral mixing. In conclusion, to ensure 25 resolvable spectral bands and avoid spectral mixing simultaneously, the apex angle error should be controlled within a narrow range above zero.
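The sensitivity of the spread to the apex angle can be estimated from the geometry factor of the minimum-deviation dispersion formula; a sketch holding the refractive index at the tabulated $n(600\,\mathrm{nm})$ (an illustrative linearized estimate, not the full simulation behind Fig. 14):

```python
import numpy as np

n0 = 1.8033                # SF6 refractive index at 600 nm (Table 1)
A = np.deg2rad(29.9)       # nominal apex angle

def disp_factor(apex):
    """Geometry factor multiplying dn/dlambda in the angular dispersion."""
    s = np.sin(apex / 2)
    return 2 * s / np.sqrt(1 - (n0 * s) ** 2)

dA = np.deg2rad(0.1)       # a 0.1-deg apex-angle error, chosen for illustration
rel = (disp_factor(A + dA) - disp_factor(A)) / disp_factor(A)
print(rel)                 # relative growth of the spectral spread per 0.1 deg
assert rel > 0             # a positive apex-angle error enlarges the spread
```

Since the spread scales with this factor, the fractional margin between the nominal 1.08-mm spread and the 1.125-mm limitation fixes how much positive apex-angle error is tolerable.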
6. Conclusions

In this research, we establish a theoretical model of the IMS. Simulations based on this model are performed to generate the PRFs and spectral imaging data. Moreover, the mirror tilt angle error and the prism apex angle error are analyzed through simulations. The results present the corresponding relation between the tilt error of the mirror facets and the light intensity variation of the system, which shows that the tilt angle error of the mirror facets causes the light intensity on the image plane to decrease. When manufacturing the image mapper, the mirror tilt angle error should be kept below the threshold determined in Section 5.1 to ensure a 95% LIR in theory. To avoid spectral mixing, the prism apex angle error should be controlled within the narrow range above zero determined in Section 5.2.
The presented model can be used to analyze the influences of other errors and various factors of the system on the LIR, the cross talk, the reconstructed image quality, and the spectral information.[39] In future work, the roughness of and scattering from the strip mirror surfaces and edges can be added to the image mapper reflection model as a phase-modulating coefficient. The assembly errors of the image mapper and the prism array can be included in the model. Aberrations can be added to the model as a modulating coefficient of the wave front.[40] In addition, a sensor model can also be considered to evaluate the influence of the sensor noise.